Rapidly assimilating monitoring data to update predictions of pressure buildup and carbon dioxide (CO2) plume migration under geologic uncertainty is a challenging problem in geologic carbon storage. The high computational cost of data assimilation with a high-dimensional parameter space impedes fast decision-making for commercial-scale reservoir management. We propose to leverage physical understanding of porous-media flow behavior together with deep learning techniques to develop a fast history-matching and reservoir-response-prediction workflow. Applying an ensemble smoother with multiple data assimilation (ESMDA) framework, the workflow updates geologic properties and predicts reservoir performance, with quantified uncertainty, from the pressure history and the CO2 plumes interpreted through seismic inversion. Because the most computationally expensive component of this workflow is the reservoir simulation, we develop surrogate models to predict dynamic pressure and CO2 plume extent under multi-well injection. The surrogate models employ deep convolutional neural networks, specifically a wide residual network and a residual U-Net. The workflow is validated against a flat three-dimensional reservoir model representative of a clastic shelf depositional environment. Intelligent treatments are applied to bridge the gap between the quantities in the true 3D reservoir model and those in the single-layer reservoir model. The workflow can complete history matching and reservoir forecasting in less than one hour on a mainstream personal workstation.
translated by 谷歌翻译
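The core of the ESMDA framework referenced above is a repeated Kalman-style ensemble update with inflated observation noise. The sketch below is minimal and illustrative: the linear toy forward model, ensemble size, and four-step inflation schedule are assumptions, not the paper's configuration; in the actual workflow the forward model would be the (surrogate) reservoir simulator.

```python
import numpy as np

def esmda_update(M, D, d_obs, C_e, alpha, rng):
    """One ESMDA iteration: update parameter ensemble M (n_m x n_e)
    using simulated data D (n_d x n_e), observations d_obs (n_d,),
    observation-error covariance C_e, and inflation factor alpha."""
    n_e = M.shape[1]
    # Ensemble anomalies (deviations from the ensemble mean).
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    C_md = dM @ dD.T / (n_e - 1)    # parameter-data cross-covariance
    C_dd = dD @ dD.T / (n_e - 1)    # data auto-covariance
    # Perturb observations with inflated noise (covariance alpha * C_e).
    E = rng.multivariate_normal(np.zeros(len(d_obs)), alpha * C_e, size=n_e).T
    D_pert = d_obs[:, None] + E
    K = C_md @ np.linalg.solve(C_dd + alpha * C_e, D_pert - D)
    return M + K

# Toy usage: linear forward model g(m) = G m, 2 parameters, 3 data points.
rng = np.random.default_rng(0)
G = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.3]])
m_true = np.array([2.0, -1.0])
C_e = 0.01 * np.eye(3)
d_obs = G @ m_true
alphas = [4.0, 4.0, 4.0, 4.0]        # sum of 1/alpha_i must equal 1
M = rng.normal(size=(2, 200))        # prior ensemble
for a in alphas:
    D = G @ M                        # the "reservoir simulation" step
    M = esmda_update(M, D, d_obs, C_e, a, rng)
print(M.mean(axis=1))                # posterior mean approaches m_true
```

In the workflow described in the abstract, each `G @ M` evaluation is replaced by a forward run of the CNN surrogate, which is what makes sub-hour assimilation feasible.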
Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation, but the quality bar for medical and clinical applications is high. Today, attempts to assess models' clinical knowledge typically rely on automated evaluations on limited benchmarks. There is no standard to evaluate model predictions and reasoning across a breadth of tasks. To address this, we present MultiMedQA, a benchmark combining six existing open question answering datasets spanning professional medical exams, research, and consumer queries; and HealthSearchQA, a new free-response dataset of medical questions searched online. We propose a framework for human evaluation of model answers along multiple axes including factuality, precision, possible harm, and bias. In addition, we evaluate PaLM (a 540-billion parameter LLM) and its instruction-tuned variant, Flan-PaLM, on MultiMedQA. Using a combination of prompting strategies, Flan-PaLM achieves state-of-the-art accuracy on every MultiMedQA multiple-choice dataset (MedQA, MedMCQA, PubMedQA, MMLU clinical topics), including 67.6% accuracy on MedQA (US Medical License Exam questions), surpassing prior state-of-the-art by over 17%. However, human evaluation reveals key gaps in Flan-PaLM responses. To resolve this we introduce instruction prompt tuning, a parameter-efficient approach for aligning LLMs to new domains using a few exemplars. The resulting model, Med-PaLM, performs encouragingly, but remains inferior to clinicians. We show that comprehension, recall of knowledge, and medical reasoning improve with model scale and instruction prompt tuning, suggesting the potential utility of LLMs in medicine. Our human evaluations reveal important limitations of today's models, reinforcing the importance of both evaluation frameworks and method development in creating safe, helpful LLM models for clinical applications.
Managing novelty in perception-based human activity recognition (HAR) is critical in realistic settings to improve task performance over time and ensure solution generalization outside of prior seen samples. Novelty manifests in HAR as unseen samples, activities, objects, environments, and sensor changes, among other ways. Novelty may be task-relevant, such as a new class or new features, or task-irrelevant resulting in nuisance novelty, such as never before seen noise, blur, or distorted video recordings. To perform HAR optimally, algorithmic solutions must be tolerant to nuisance novelty, and learn over time in the face of novelty. This paper 1) formalizes the definition of novelty in HAR building upon the prior definition of novelty in classification tasks, 2) proposes an incremental open world learning (OWL) protocol and applies it to the Kinetics datasets to generate a new benchmark KOWL-718, 3) analyzes the performance of current state-of-the-art HAR models when novelty is introduced over time, 4) provides a containerized and packaged pipeline for reproducing the OWL protocol and for modifying for any future updates to Kinetics. The experimental analysis includes an ablation study of how the different models perform under various conditions as annotated by Kinetics-AVA. The protocol as an algorithm for reproducing experiments using the KOWL-718 benchmark will be publicly released with code and containers at https://github.com/prijatelj/human-activity-recognition-in-an-open-world. The code may be used to analyze different annotations and subsets of the Kinetics datasets in an incremental open world fashion, as well as be extended as further updates to Kinetics are released.
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
Although prediction models for delirium, a commonly occurring condition during general hospitalization or post-surgery, have not gained huge popularity, their algorithmic bias evaluation is crucial due to the existing association between social determinants of health and delirium risk. In this context, using MIMIC-III and another academic hospital dataset, we present some initial experimental evidence showing how sociodemographic features such as sex and race can impact the model performance across subgroups. With this work, our intent is to initiate a discussion about the intersectionality effects of old age, race and socioeconomic factors on the early-stage detection and prevention of delirium using ML.
Modern healthcare systems are conducting continuous, automated surveillance of the electronic medical record (EMR) to identify adverse events with increasing frequency; however, many events, such as sepsis, do not have clearly elucidated prodromes (i.e., event chains) that can be used to identify and intercept the adverse event early in its course. Currently, there is no reliable framework for discovering or describing the causal chains that precede adverse hospital events. Clinically relevant and interpretable results require a framework that can (1) infer temporal interactions across multiple patient features found in EMR data (e.g., labs, vital signs, etc.) and (2) identify pattern(s) that precede and are specific to an impending adverse event (e.g., sepsis). In this work, we propose a linear multivariate Hawkes process model, coupled with a $g(x) = x^+$ link function to allow potential inhibition effects, to recover a Granger causal (GC) graph. We develop a two-phase scheme that maximizes a surrogate of the likelihood to estimate the problem parameters. The two-phase algorithm is scalable and shown to be effective via our numerical simulations. It is subsequently extended to a patient dataset from the Grady hospital system in Atlanta, Georgia, where the fitted Granger causal graph identifies several highly interpretable chains that precede sepsis.
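The model above can be sketched as follows. The exponential kernel, its decay rate, and the example influence matrix are assumptions for illustration, and the paper's two-phase surrogate-likelihood estimation procedure is not reproduced here; the sketch only shows how negative entries of the influence matrix encode inhibition, which the $g(x) = x^+$ link clips to keep intensities nonnegative, and how the GC graph is read off the fitted matrix.

```python
import numpy as np

def intensity(t, mu, A, beta, events):
    """ReLU-linked linear multivariate Hawkes intensity at time t.
    mu: (d,) baseline rates; A: (d, d) influence matrix (A[i, j] < 0
    means process j inhibits process i); beta: exponential kernel
    decay rate; events: list of (time, dim) pairs with time < t."""
    lam = mu.copy()
    for s, j in events:
        # Exponential kernel beta * exp(-beta * dt) integrates to 1.
        lam += A[:, j] * beta * np.exp(-beta * (t - s))
    return np.maximum(lam, 0.0)      # the g(x) = x^+ link

def granger_graph(A, tol=1e-3):
    """Edge j -> i in the Granger causal graph iff |A[i, j]| > tol."""
    return np.abs(A) > tol

mu = np.array([0.2, 0.1, 0.3])
A = np.array([[0.0, 0.8, 0.0],
              [-0.5, 0.0, 0.0],      # dimension 0 inhibits dimension 1
              [0.0, 0.4, 0.0]])
events = [(0.5, 1), (1.2, 0)]
print(intensity(2.0, mu, A, beta=1.0, events=events))
print(granger_graph(A).astype(int))
```

In the clinical application, each dimension would correspond to a patient feature (a lab value crossing a threshold, a vital-sign alarm, etc.), and chains of edges in the recovered GC graph are the interpretable precursors of sepsis.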
As an important tool in cyber defense, deception is evolving rapidly and complements existing perimeter security measures to rapidly detect breaches and data theft. One of the factors limiting the use of deception has been the cost of generating realistic artifacts by hand. Recent advances in machine learning, however, have created opportunities for scalable, automated generation of realistic deceptions. This vision paper describes the opportunities and challenges involved in developing models to mimic many of the common elements of the IT stack for deception effects.
External eye photographs were recently shown to reveal signs of diabetic retinal disease and elevated HbA1c. In this paper, we evaluate whether external eye photographs contain information about additional systemic medical conditions. We developed a deep learning system (DLS) that takes external eye photographs as input and predicts multiple systemic parameters, such as those related to the liver (albumin, AST); kidney (eGFR using the race-free 2021 CKD-EPI creatinine equation, urine ACR); bone and mineral (calcium); thyroid (TSH); and blood count (Hgb, WBC, platelets). Development leveraged 151,237 images from 49,015 patients with diabetes undergoing diabetic eye screening at 11 sites across Los Angeles county, California. Evaluation focused on 9 pre-specified systemic parameters and leveraged 3 validation sets (A, B, C) spanning 28,869 patients with and without diabetes undergoing eye screening at 3 independent sites in Los Angeles county, California, and the greater Atlanta area, Georgia. We compared against baseline models incorporating available clinicodemographic variables (e.g., age, sex, race/ethnicity, years with diabetes). Relative to baseline, the DLS achieved statistically significant superior performance at detecting AST>36, calcium<8.6, eGFR<60, Hgb<11, platelets<150, ACR>=300, and WBC<4 on validation set A (a patient population similar to the development sets), where the AUC of the DLS exceeded that of the baseline by 5.2-19.4%. On validation sets B and C, whose patient populations differed substantially from the development sets, the DLS outperformed the baseline for ACR>=300 and Hgb<11 by 7.3-13.2%. Our findings provide further evidence that external eye photographs contain biomarkers of systemic health spanning multiple organ systems. Further work is needed to investigate whether and how these biomarkers can be translated into clinical impact.
One concern with the rise of large language models is their potential for significant harm, particularly from pretraining on biased, obscene, copyrighted, and private information. Emerging ethical approaches have attempted to filter pretraining material, but such approaches have been ad hoc and have failed to take context into account. We offer an approach to filtering grounded in law, which directly addresses the tradeoffs of filtering material. First, we gather and make available the Pile of Law, a 256GB (and growing) dataset of open-source English-language legal and administrative data, covering court opinions, contracts, administrative rules, and legislative records. Pretraining on the Pile of Law may help address legal tasks that hold the promise of improving access to justice. Second, we distill the legal norms that governments have developed to constrain the inclusion of toxic or private content into actionable lessons for researchers, and discuss how our dataset reflects these norms. Third, we show how the Pile of Law offers researchers the opportunity to learn such filtering rules directly from the data, providing an exciting new research direction for model-based processing.
Although electronic health records are a rich data source for biomedical research, these systems are not implemented uniformly across healthcare settings, and large amounts of data may be missing due to healthcare fragmentation and a lack of interoperability between siloed electronic health records. Considering that the deletion of cases with missing data may introduce severe bias into subsequent analyses, several authors prefer to adopt a multiple imputation (MI) strategy to recover the missing information. Unfortunately, although several works in the literature have documented promising results using any of the different multiple imputation algorithms now freely available for research, there is no consensus on which MI algorithm works best. Beyond the choice of MI strategy, the choice of the imputation algorithm and its application settings is also crucial and challenging. In this paper, inspired by the seminal works of Rubin and van Buuren, we propose a methodological framework that may be applied to evaluate and compare multiple imputation techniques, with the aim of choosing the most valid one for computing inferences in clinical research. Our framework has been applied to validate, and to extend on a larger cohort, the results we presented in a previous study, in which we evaluated the influence of key patient descriptors and of COVID-19 severity in patients with type 2 diabetes mellitus, whose data are provided by the National COVID Cohort Collaborative Enclave.
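A framework in the tradition of Rubin and van Buuren ultimately combines the analyses of the m completed datasets with Rubin's pooling rules, which is the step that makes the inferences from different MI algorithms comparable. A minimal sketch (the point estimates and variances below are purely illustrative, not data from the study):

```python
import numpy as np

def rubin_pool(estimates, variances):
    """Pool m per-imputation analyses with Rubin's rules.
    estimates: per-imputation point estimates of one quantity;
    variances: their squared standard errors.
    Returns the pooled estimate and its total variance."""
    m = len(estimates)
    q_bar = np.mean(estimates)               # pooled point estimate
    w = np.mean(variances)                   # within-imputation variance
    b = np.var(estimates, ddof=1)            # between-imputation variance
    t = w + (1.0 + 1.0 / m) * b              # total variance
    return q_bar, t

# Illustrative example: a regression coefficient estimated on m = 5
# completed datasets produced by some MI algorithm.
est = [0.42, 0.45, 0.40, 0.44, 0.43]
var = [0.010, 0.011, 0.009, 0.010, 0.012]
q, t = rubin_pool(est, var)
print(q, t ** 0.5)   # pooled coefficient and its pooled standard error
```

Comparing MI techniques then amounts to checking which algorithm yields pooled estimates and confidence intervals with the best validity (e.g., coverage) for the clinical quantities of interest.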